Search results: Creators/Authors contains "Fang, Kuai"

  1. Process-based modelling offers interpretability and physical consistency in many domains of geosciences but struggles to leverage large datasets efficiently. Machine-learning methods, especially deep networks, have strong predictive skills yet are unable to answer specific scientific questions. In this Perspective, we explore differentiable modelling as a pathway to dissolve the perceived barrier between process-based modelling and machine learning in the geosciences and demonstrate its potential with examples from hydrological modelling. ‘Differentiable’ refers to accurately and efficiently calculating gradients with respect to model variables or parameters, enabling the discovery of high-dimensional unknown relationships. Differentiable modelling involves connecting (flexible amounts of) prior physical knowledge to neural networks, pushing the boundary of physics-informed machine learning. It offers better interpretability, generalizability, and extrapolation capabilities than purely data-driven machine learning, achieving a similar level of accuracy while requiring less training data. Additionally, the performance and efficiency of differentiable models scale well with increasing data volumes. Under data-scarce scenarios, differentiable models have outperformed machine-learning models in producing short-term dynamics and decadal-scale trends owing to the imposed physical constraints. Differentiable modelling approaches are primed to enable geoscientists to ask questions, test hypotheses, and discover unrecognized physical relationships. Future work should address computational challenges, reduce uncertainty, and verify the physical significance of outputs. 
    Free, publicly-accessible full text available July 11, 2024
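    The "differentiable" mechanic the abstract describes can be illustrated with a toy example. The sketch below (not from the paper; the reservoir model, forcing, and "observations" are all synthetic) runs a one-parameter linear reservoir while propagating the sensitivity dQ/dk forward alongside the simulation, so the parameter can be recovered by gradient descent on a loss against observed discharge:

    ```python
    import numpy as np

    def simulate_with_gradient(k, P, S0=10.0):
        """Linear reservoir S[t+1] = (1-k)*S[t] + P[t], Q[t] = k*S[t],
        carrying dS/dk and dQ/dk forward (sensitivity method)."""
        S, dS = S0, 0.0
        Q, dQ = [], []
        for p in P:
            Q.append(k * S)
            dQ.append(S + k * dS)        # d(k*S)/dk = S + k*dS/dk
            dS = (1.0 - k) * dS - S      # differentiate the storage update
            S = (1.0 - k) * S + p
        return np.array(Q), np.array(dQ)

    def calibrate(P, Q_obs, k0=0.1, lr=0.01, steps=500):
        """Gradient descent on MSE(Q(k), Q_obs) using the analytic gradient."""
        k = k0
        for _ in range(steps):
            Q, dQ = simulate_with_gradient(k, P)
            grad = 2.0 * np.mean((Q - Q_obs) * dQ)          # d(MSE)/dk
            k = float(np.clip(k - lr * np.clip(grad, -5.0, 5.0), 0.01, 0.99))
        return k

    rng = np.random.default_rng(0)
    P = rng.uniform(0.0, 5.0, size=200)           # synthetic precipitation forcing
    Q_obs, _ = simulate_with_gradient(0.3, P)     # "observed" discharge, true k = 0.3
    k_est = calibrate(P, Q_obs)
    print(f"recovered k = {k_est:.3f}")
    ```

    In the paper's setting the hand-derived sensitivity is replaced by automatic differentiation, and the unknown relationships are high-dimensional neural-network components rather than a single scalar, but the gradient flow through the process model is the same idea.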
  2. Abstract

    When fitting statistical models to variables in geoscientific disciplines such as hydrology, it is customary practice to stratify a large domain into multiple regions (or regimes) and study each region separately. Traditional wisdom suggests that models built for each region separately will have higher performance because of homogeneity within each region. However, each stratified model has access to fewer and less diverse data points. Here, through two hydrologic examples (soil moisture and streamflow), we show that conventional wisdom may no longer hold in the era of big data and deep learning (DL). We systematically examined an effect we call data synergy, where the results of the DL models improved when data were pooled together from characteristically different regions. The performance of the DL models benefited from modest diversity in the training data compared to a homogeneous training set, even with similar data quantity. Moreover, allowing heterogeneous training data makes much larger training datasets eligible, which is an inherent advantage of DL. A large, diverse data set is advantageous in terms of representing extreme events and future scenarios, which has strong implications for climate change impact assessment. The results here suggest the research community should place greater emphasis on data sharing.
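    The stratified-versus-pooled comparison can be sketched in miniature. The example below is illustrative only: plain least-squares regression stands in for the paper's deep networks, and the two "regions" are synthetic data sharing the same underlying coefficients but differing in feature statistics. Each region's model is fit either on its own small sample or on the pooled sample, then evaluated on held-out data from its own region:

    ```python
    import numpy as np

    def fit(X, y):
        # Ordinary least squares; stands in for training a DL model
        return np.linalg.lstsq(X, y, rcond=None)[0]

    def mse(w, X, y):
        return float(np.mean((X @ w - y) ** 2))

    rng = np.random.default_rng(1)
    w_true = rng.normal(size=5)                  # "physics" shared across regions

    def make_region(n, shift):
        X = rng.normal(loc=shift, size=(n, 5))   # regions differ in feature climate
        y = X @ w_true + 0.5 * rng.normal(size=n)
        return X, y

    train = [make_region(15, 0.0), make_region(15, 2.0)]   # small stratified sets
    test = [make_region(200, 0.0), make_region(200, 2.0)]

    w_pool = fit(np.vstack([X for X, _ in train]),
                 np.concatenate([y for _, y in train]))

    strat_mses, pooled_mses = [], []
    for (Xtr, ytr), (Xte, yte) in zip(train, test):
        strat_mses.append(mse(fit(Xtr, ytr), Xte, yte))
        pooled_mses.append(mse(w_pool, Xte, yte))
    print("stratified:", strat_mses, "pooled:", pooled_mses)
    ```

    The pooled fit sees twice the data, so its coefficient estimates are less noisy even though the training set is more heterogeneous; the paper's point is that this tradeoff favors pooling for data-hungry DL models.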

     
    more » « less
  3. The Soil Moisture Active Passive (SMAP) mission measures important soil moisture data globally. SMAP's products might not always perform better than land surface models (LSMs) when evaluated against in situ measurements. However, we hypothesize that SMAP presents added value for long-term soil moisture estimation in a data fusion setting as evaluated by in situ data. Here, with the help of a time series deep learning (DL) method, we created a seamlessly extended SMAP data set to test this hypothesis and, importantly, gauge whether such benefits extend to years beyond SMAP's limited lifespan. We first show that the DL model, called long short-term memory (LSTM), can extrapolate SMAP for several years and the results are similar to the training period. We obtained prolongation results with low performance degradation where SMAP itself matches well with in situ data. Interannual trends of root-zone soil moisture are surprisingly well captured by LSTM. In some cases, LSTM's performance is limited by SMAP, whose main issue appears to be its shallow sensing depth. Despite this limitation, a simple average between LSTM and the Noah LSM frequently outperforms Noah alone. Moreover, Noah combined with LSTM is more skillful than Noah combined with another LSM. Over sparsely instrumented sites, the Noah-LSTM combination shows a stronger edge. Our results verified the value of LSTM-extended SMAP data. Moreover, DL is completely data driven and does not require structural assumptions. As such, it has its unique potential for long-term projections and may be applied synergistically with other model-data integration techniques.
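    The "simple average" fusion described above is easy to sketch. In the toy example below, all series are fabricated: a fake in situ truth and two noisy estimators standing in for the LSTM product and the Noah LSM. Because the two estimators carry independent errors, their average scores a lower RMSE than either alone, which is the mechanism behind the fusion result:

    ```python
    import numpy as np

    def rmse(pred, obs):
        return float(np.sqrt(np.mean((pred - obs) ** 2)))

    rng = np.random.default_rng(2)
    # Fake in situ soil-moisture series with a seasonal cycle
    obs = 0.25 + 0.05 * np.sin(np.linspace(0.0, 8.0 * np.pi, 365))
    lstm = obs + rng.normal(0.0, 0.03, obs.size)   # two estimators with
    noah = obs + rng.normal(0.0, 0.03, obs.size)   # independent errors
    fused = 0.5 * (lstm + noah)                    # simple average fusion

    print(f"LSTM  {rmse(lstm, obs):.4f}")
    print(f"Noah  {rmse(noah, obs):.4f}")
    print(f"fused {rmse(fused, obs):.4f}")
    ```

    Averaging independent zero-mean errors shrinks their standard deviation by roughly 1/sqrt(2); when the two products' errors are correlated (as real LSTM and LSM errors partly are), the gain is smaller but usually still positive.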
  4. Nowcasts, or near-real-time (NRT) forecasts, of soil moisture based on the Soil Moisture Active Passive (SMAP) mission could provide substantial value for a range of applications including hazards monitoring and agricultural planning. To provide such a NRT forecast with high fidelity, we enhanced a time series deep learning architecture, long short-term memory (LSTM), with a novel data integration (DI) kernel to assimilate the most recent SMAP observations as soon as they become available. The kernel is adaptive in that it can accommodate irregular observational schedules. Testing over the CONUS, this NRT forecast product showcases predictions with unprecedented accuracy when evaluated against subsequent SMAP retrievals. It showed smaller error than NRT forecasts reported in the literature, especially at longer forecast latency. The comparative advantage was due to LSTM’s structural improvements, as well as its ability to utilize more input variables and more training data. The DI-LSTM was compared to the original LSTM model that runs without data integration, referred to as the projection model here. We found that the DI procedure removed the autocorrelated effects of forcing errors and errors due to processes not represented in the inputs, for example, irrigation and floodplain/lake inundation, as well as mismatches due to unseen forcing conditions. The effects of this purely data-driven DI kernel are discussed for the first time in the geosciences. Furthermore, this work presents an upper-bound estimate for the random component of the SMAP retrieval error.
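    One common way to make an input kernel "adaptive" to irregular observation schedules is to feed the model, at every step, the most recent available retrieval together with its age. The sketch below is a hypothetical illustration of that feature design, not the paper's exact kernel; gaps in the retrieval series are marked with NaN, and features at day t use only retrievals from before day t to avoid leakage:

    ```python
    import numpy as np

    def di_features(obs):
        """obs: 1-D retrieval series with NaN where no observation exists.
        Returns (last_obs, days_since): the most recent retrieval available
        before each day, and how many days old it is (NaN before the first)."""
        last = np.full(obs.shape, np.nan)
        lag = np.full(obs.shape, np.nan)
        current, age = np.nan, np.nan
        for t, v in enumerate(obs):
            last[t], lag[t] = current, age   # only past information enters day t
            if np.isfinite(v):
                current, age = v, 1.0        # today's retrieval is usable tomorrow
            elif np.isfinite(age):
                age += 1.0                   # observation gap: feature grows stale
        return last, lag

    obs = np.array([0.30, np.nan, np.nan, 0.40, np.nan, np.nan, np.nan, 0.35])
    last, lag = di_features(obs)
    print(last)   # most recent retrieval available on each day
    print(lag)    # days since that retrieval
    ```

    Supplying the age alongside the value lets a recurrent network learn how much to trust a stale observation, which is what allows one model to handle arbitrary revisit schedules.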

     
  5. Abstract

    Recent observations with varied schedules and types (moving average, snapshot, or regularly spaced) can help to improve streamflow forecasts, but it is challenging to integrate them effectively. Based on a long short‐term memory (LSTM) streamflow model, we tested multiple versions of a flexible procedure we call data integration (DI) to leverage recent discharge measurements to improve forecasts. DI accepts lagged inputs either directly or through a convolutional neural network unit. DI ubiquitously elevated streamflow forecast performance to unseen levels, reaching a record continental‐scale median Nash‐Sutcliffe Efficiency coefficient value of 0.86. Integrating moving‐average discharge, discharge from the last few days, or even average discharge from the previous calendar month could all improve daily forecasts. Directly using lagged observations as inputs was comparable in performance to using the convolutional neural network unit. Importantly, we obtained valuable insights regarding hydrologic processes impacting LSTM and DI performance. Before applying DI, the base LSTM model worked well in mountainous or snow‐dominated regions, but less well in regions with low discharge volumes (due to either low precipitation or high precipitation‐energy synchronicity) and large interannual storage variability. DI was most beneficial in regions with high flow autocorrelation: it greatly reduced baseflow bias in groundwater‐dominated western basins and also improved peak prediction for basins with dynamical surface water storage, such as the Prairie Potholes or Great Lakes regions. However, even DI cannot elevate performance in high‐aridity basins with 1‐day flash peaks. Despite this limitation, there is much promise for a deep‐learning‐based forecast paradigm due to its performance, automation, efficiency, and flexibility.
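    The headline metric above, the Nash-Sutcliffe Efficiency (NSE), compares squared forecast error against the variance of the observations: 1 is a perfect forecast, 0 means no better than always predicting the observed mean, and the reported continental-scale median of 0.86 is close to the perfect end. A minimal implementation:

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe Efficiency:
        1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return float(1.0 - np.sum((sim - obs) ** 2)
                     / np.sum((obs - obs.mean()) ** 2))

    obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    print(nse(obs, obs))                      # perfect forecast -> 1.0
    print(nse(np.full(5, obs.mean()), obs))   # mean benchmark  -> 0.0
    ```

    Because the denominator is the observed variance, NSE is a skill score relative to climatology; it can be arbitrarily negative for forecasts worse than the mean.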

     
  6. Abstract

    Recently, recurrent deep networks have shown promise to harness newly available satellite‐sensed data for long‐term soil moisture projections. However, to be useful in forecasting, deep networks must also provide uncertainty estimates. Here we evaluated Monte Carlo dropout with an input‐dependent data noise term (MCD+N), an efficient uncertainty estimation framework originally developed in computer vision, for hydrologic time series predictions. MCD+N simultaneously estimates a heteroscedastic input‐dependent data noise term (a trained error model attributable to observational noise) and a network weight uncertainty term (attributable to insufficiently constrained model parameters). Although MCD+N has appealing features, many heuristic approximations were employed during its derivation, and rigorous evaluations and evidence of its asserted capability to detect dissimilarity were lacking. To address this, we provided an in‐depth evaluation of the scheme's potential and limitations. We showed that for reproducing soil moisture dynamics recorded by the Soil Moisture Active Passive (SMAP) mission, MCD+N indeed gave a good estimate of predictive error, provided that we tuned a hyperparameter and used a representative training data set. The input‐dependent term responded strongly to observational noise, while the model term clearly acted as a detector for physiographic dissimilarity from the training data, behaving as intended. However, when the training and test data were characteristically different, the input‐dependent term could be misled, undermining its reliability. Additionally, due to the data‐driven nature of the model, data noise also influences network weight uncertainty, and therefore the two uncertainty terms are correlated. Overall, this approach has promise, but care is needed to interpret the results.
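    The way MCD+N combines its two uncertainty terms at prediction time can be sketched numerically. Assume K stochastic forward passes (dropout left on) each return a mean prediction and a predicted data-noise variance per time step; the weight-uncertainty term is the spread of the means, the data-noise term is the average predicted variance, and the predictive variance is their sum. The ensemble below is fabricated for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    K, T = 50, 10                                 # dropout passes x time steps

    # Stand-ins for network outputs: per-pass means mu and noise variances sigma2
    mu = 0.3 + 0.02 * rng.normal(size=(K, T))     # spread mimics dropout jitter
    sigma2 = np.full((K, T), 0.001)               # input-dependent noise estimate

    model_var = mu.var(axis=0)                    # weight-uncertainty term
    data_var = sigma2.mean(axis=0)                # data-noise (aleatoric) term
    total_var = model_var + data_var              # predictive variance
    total_sd = np.sqrt(total_var)
    print(total_sd)
    ```

    As the abstract notes, the two terms are not truly independent in practice: noisy training data also inflates weight uncertainty, so the decomposition should be read as indicative rather than exact.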

     
  7. Abstract. Recently, deep learning (DL) has emerged as a revolutionary and versatile tool transforming industry applications and generating new and improved capabilities for scientific discovery and model building. The adoption of DL in hydrology has so far been gradual, but the field is now ripe for breakthroughs. This paper suggests that DL-based methods can open up a complementary avenue toward knowledge discovery in hydrologic sciences. In the new avenue, machine-learning algorithms present competing hypotheses that are consistent with data. Interrogative methods are then invoked to interpret DL models for scientists to further evaluate. However, hydrology presents many challenges for DL methods, such as data limitations, heterogeneity and co-evolution, and the general inexperience of the hydrologic field with DL. The roadmap toward DL-powered scientific advances will require the coordinated effort from a large community involving scientists and citizens. Integrating process-based models with DL models will help alleviate data limitations. The sharing of data and baseline models will improve the efficiency of the community as a whole. Open competitions could serve as the organizing events to greatly propel growth and nurture data science education in hydrology, which demands a grassroots collaboration. The area of hydrologic DL presents numerous research opportunities that could, in turn, stimulate advances in machine learning as well.

     